Why I rarely (if ever) recommend anything but a 50/50 split test in email marketing
One of the most common “shortcuts” I see in email marketing is this:
Test two variants on 10% each of your audience. Send the winner to the remaining 80%.
Sounds efficient, right?
But in truth, this 10/10/80 split is a gamble. It cuts corners and can do more harm than good, especially if your goal is to make smarter decisions, not just move fast.
I almost exclusively use a 50/50 split when running email A/B tests. Here’s why:
5 reasons to use a 50/50 split when testing email campaigns
1. You need volume AND time to measure real success (clicks & conversions)
Let’s be clear: opens are not a valid success metric.
With privacy changes like Apple Mail Privacy Protection and the general unreliability of open tracking, performance should be judged by clicks or conversions – the actions that actually drive value.
But clicks and conversions don’t happen immediately. People click later. They convert hours—or even days—after the email lands in their inbox.
A 50/50 split gives both variants:
- Enough volume to reach statistical significance
- Enough time for meaningful engagement and conversion data to emerge
A rushed 10/10/80 test simply can’t give you that clarity.
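To put rough numbers on "enough volume": the standard two-proportion power calculation tells you how many recipients each variant needs before a given lift is even detectable. Here's a quick Python sketch; the baseline click rate and target lift are illustrative, not from any real campaign:

```python
import math

def min_sample_per_variant(p_base, rel_lift, ):
    """Approximate minimum recipients per variant to detect a relative
    lift in click rate (two-sided alpha = 0.05, power = 0.80), using
    the normal approximation for comparing two proportions."""
    p1 = p_base
    p2 = p_base * (1 + rel_lift)
    z_alpha = 1.96  # two-sided significance at 5%
    z_beta = 0.84   # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 3% baseline click rate, hoping to detect a 10% relative lift
print(min_sample_per_variant(0.03, 0.10))
```

At a 3% click rate, detecting a 10% relative lift takes on the order of 50,000 recipients per variant. A 10% test cell on a typical list doesn't come close, which is exactly why those early "winners" are so unreliable.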
2. Premature “winners” are often false positives
In a typical 10/10/80 test, marketers are eager to act fast—often choosing a “winner” within an hour or two of send time.
But if you’re basing that decision on early engagement, you’re gambling with your results. Many of those so-called “wins” are nothing more than random variance, timing quirks, or noise.
And if you get it wrong, you’re sending a weaker version to the majority of your list—and learning nothing useful in the process.
3. Send-time bias skews results
The 10/10/80 model often means the test goes out in the morning, and the “winner” is deployed to the rest of the audience a few hours later.
That delay introduces send-time bias. Maybe one version performed better simply because it landed in inboxes when more people were checking their email—not because it was genuinely more effective.
A 50/50 test eliminates this variable by sending both variants at the same time, creating a cleaner, fairer comparison.
4. Close results in clicks or conversions are incredibly valuable
Marketers often crave a runaway win—where one version outperforms the other by 20% or more. But those kinds of blowouts are rare.
More often, your email A/B tests will result in tighter margins—like a 4% lift in clicks or a 6% boost in conversions. Some might dismiss that as insignificant. I see it as pure gold—if the test was run properly.
With a 50/50 test, you’ve got a big enough, balanced sample to confidently learn from those narrow differences.
And this is where the principle of the Aggregation of Marginal Gains shines.
Coined by British Cycling’s Dave Brailsford, this concept is about making small improvements across the board—refining bit by bit. Applied to email? It looks like:
- Improving your CTA by 4%
- Getting 5% more clicks with a layout tweak
- Lifting conversions by 6% with more persuasive copy
Apply these learnings to future campaigns. Each change on its own might seem minor—but stacked across dozens of campaigns? You achieve significant programme-wide uplift.
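The compounding here is just multiplication of relative lifts. A tiny sketch, using the hypothetical 4%, 5% and 6% gains above:

```python
# Hypothetical per-test relative lifts (4%, 5%, 6%)
lifts = [0.04, 0.05, 0.06]

combined = 1.0
for lift in lifts:
    combined *= 1 + lift  # each gain multiplies, not adds

print(f"Combined uplift: {combined - 1:.1%}")  # → Combined uplift: 15.8%
```

Three modest wins already compound to roughly a 15.8% uplift; stacked across dozens of campaigns, the effect keeps multiplying.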
Close results fuel that process. And only 50/50 testing lets you detect them with confidence.
5. It’s about programme-level learning, not just campaign wins
The Holistic Testing Methodology is built around this core belief: testing is not about picking a winner, it’s about generating insight.
When you treat each A/B test in your email programme as a stepping stone, feeding into the next hypothesis, the next campaign, the next learning, you build long-term strategic advantage.
You understand your audience better. You write stronger copy. You improve performance consistently, campaign after campaign.
That kind of insight doesn’t come from 10/10/80 tests. It comes from testing properly—with patience, rigour, and intent.
TL;DR (For the skimmers in the inbox):
- 10/10/80 testing is fast but flawed.
- Clicks and conversions—not opens—are your success metrics.
- These take time and volume to track accurately.
- Only a 50/50 split gives you the clarity, confidence, and statistical power to learn.
- Real results don’t come from blowouts; they come from marginal gains, compounded over time.
Want your email tests to actually work?
If your email testing feels like guesswork—or if you’re stuck in a cycle of “winner-picking” without long-term improvement—let’s change that.
I’ll help you implement a Holistic Testing Framework that builds real learning into your email programme, so you stop reacting and start evolving.